The Suprenum is an MIMD/SIMD system. You can see a 128-processor system at GMD (Gesellschaft fuer Mathematik und Datenverarbeitung, Sankt Augustin).
Each Suprenum computing node comprises a
+ vector-processing unit (20 MFlop/s with chaining)
+ processor for program control (MC 68020)
+ communication processor
+ 8 MB memory
16 such computing nodes are interconnected via a high-speed bus (320 MB/s) to form a cluster.
In addition, each cluster is equipped with a
+ disk-controller node (2GB hard-disk)
+ dedicated node for monitoring and diagnosis
+ 2 communication nodes for linking up with the upper interconnection system
The interconnection system is provided by the Suprenum bus system, which connects
a 'grid' of clusters, toroidally in each direction, by a double serial bus (125 MB/s).
Our largest configuration comprises 16 clusters interconnected to form a 4x4 matrix, giving 256 computing nodes. The cluster system is accessible to the user via a front-end computer.
You can find more information in:
U. Trottenberg, K. Solchenbach:
Parallele Algorithmen und ihre Abbildung auf parallele Rechnerarchitekturen.
it 30(2), 1988
U. Trottenberg:
The Suprenum Project: Idea and Current State
Suprenum Report 8
Suprenum GmbH, Bonn, 1988
U. Trottenberg (ed.):
Proceedings of the 2nd International Suprenum Colloquium,
"Supercomputing based on parallel computer architectures".
Parallel Computing, Vol. 7, 3, 1988
H.P. Zima, H.J. Bast, M. Gerndt:
SUPERB: A tool for semi-automatic MIMD/SIMD parallelization.
Parallel Computing, 6, 1988
E. Kehl, K.-D. Oertel, K. Solchenbach, R. Vogelsang:
Application benchmarks on Suprenum.
Supercomputer, March 1991
C.-A. Thole:
Programmieren von Rechnern mit verteiltem Speicher.
PIK 13, 1990
< This is a short introduction to the Suprenum architecture
and SUPERB >
Greetings, Stephan
Stephan Springstubbe
German National Research Center For Computer Science (GMD)
Department of Supercomputing
Schloss Birlinghoven
53757 Sankt Augustin
Germany
Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: Alba.Balaniuk@imag.fr (Alba Balaniuk)
Subject: Simulator of a processor network needed
Organization: IMAG Institute, University of Grenoble, France
Hi,
I am interested in shared virtual memory (SVM) mechanisms
for parallel loosely coupled architectures.
I have already designed an SVM server and now I want to implement
and validate it. The problem is that I do not have an adequate
multiprocessor or processor network to use for the implementation.
So, does anybody know of a network or multiprocessor simulator
which I could use for this purpose? If so, is it public-domain software?
You can send the answers to my email: Alba.Balaniuk@imag.fr
About a month ago, someone posted the following information regarding
how to acquire the NAS Parallel Benchmarks:
> Questions regarding *serial* versions of the source code go to
> bm-codes@nas.nasa.gov.
> Questions regarding *parallel* versions of the source code go to Eric
> Barszcz, barszcz@nas.nasa.gov.
Regarding the serial versions, I sent email to the above *serial* address, and
I have received no reply. It's been about one month.
Regarding the parallel versions, I sent email to the above *parallel*
address, and I was sent a form to fill out. I filled out and returned
the form. I received no further reply. Again, it's been about one
month.
So does anyone know of another way of obtaining these codes? How
large are they? Can they be sent via email? Is there a site from
which I can ftp them?
thanks
--
Eddie Gornish
University of Illinois - Center for Supercomputing Research & Development
gornish@csrd.uiuc.edu
Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: liu@issi.com (Guizhong Liu)
Subject: MPP maker companies
Organization: International Software Systems, Inc. ISSI
Reply-To: liu@issi.com
Dear Netters:
I want to contact the companies making MPP supercomputers to get some information. If you know the address, email address, or phone number of any of them, please send a reply.
I'm especially interested in their characteristic
SVM properties.
Therefore I need some literature about
application schemes and the classification of algorithms,
not only parallel ones.
If you know of any literature dealing with this subject,
please mail
wittmann@informatik.tu-muenchen.de
Thanks for your help
Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
Date: Thu, 7 Oct 93 11:28:33 EDT
From: sumit@galileo.eng.wayne.edu (DPR)
Subject: MasPar MP 1 problems
I am just learning how to use a MasPar MP-1 machine. Could anyone send me some
programs in C and/or MPL which can be executed under the MPPE? I also have a
question about a system error message: "LICENSE FAILURE: 5 -- no such feature exists". As a result of this I cannot execute any program under the MPPE...
Parallel Processing Letters (PPL) aims to rapidly disseminate results in the field of parallel processing in the form of short letters. It will have a wide scope and cover topics such as the design and analysis of parallel and distributed algorithms, the theory of parallel computation, parallel programming languages, parallel programming environments, parallel architectures and VLSI circuits. Original results are published, as are experimental results if they contain an analysis corresponding to an abstract model of computation. PPL will be an ideal information vehicle for recent high-quality achievements.
Information can be obtained from the Editor-in-Chief, Professor Michel Cosnard, at cosnard@lip.ens-lyon.fr
I may not completely understand the question, but here is a simple answer: The FFT is inherently parallel -- an array of size 2^N can simply be broken into two halves -- the points with even indices and those with odd indices. Each half can then be transformed using a standard FFT. The results can then be combined to produce the grand transform with one more butterfly stage. Any signal-processing text will give you the simple multiply/add operations of a butterfly.
Naturally, each half could further be broken into two halves and so on... That is actually the basis of the FFT. At the lowest level, a 2-point transform is simple: if the input array is (f0, f1) then the transform is F0 = (f0+f1), F1 = (f0-f1). Combining these many 2-point transforms into larger transforms is the job of the butterfly stages -- one stage for each successive combination of halves. That is why the FFT requires an array size which is a power of 2.
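For readers who want to experiment, here is a minimal recursive sketch of
that decimation-in-time idea in C. It is only an illustration of the scheme
described above (the function name fft, the use of C99 complex arithmetic
and the purely recursive structure are my own choices, not code from any
particular text):

/* Minimal radix-2 decimation-in-time FFT sketch (illustrative only).
 * Assumes n is a power of 2.  The two recursive calls on the even- and
 * odd-indexed halves are the naturally parallel part described above. */
#include <complex.h>
#include <math.h>
#include <stdlib.h>

void fft(double complex *x, size_t n)
{
    if (n < 2)                        /* a 1-point transform is the identity */
        return;

    double complex *even = malloc(n / 2 * sizeof *even);
    double complex *odd  = malloc(n / 2 * sizeof *odd);
    for (size_t i = 0; i < n / 2; i++) {      /* split into the two halves */
        even[i] = x[2 * i];
        odd[i]  = x[2 * i + 1];
    }

    fft(even, n / 2);                 /* transform each half independently; */
    fft(odd,  n / 2);                 /* these two calls could run in parallel */

    const double pi = acos(-1.0);
    for (size_t k = 0; k < n / 2; k++) {      /* one butterfly stage combines them */
        double complex w = cexp(-2.0 * I * pi * (double)k / (double)n) * odd[k];
        x[k]         = even[k] + w;   /* for n == 2 this is just f0 + f1 */
        x[k + n / 2] = even[k] - w;   /* and f0 - f1                     */
    }

    free(even);
    free(odd);
}

Production codes normally use an in-place, iterative version with
bit-reversed indexing; the recursion above is just the clearest way to see
the even/odd split and the single combining butterfly stage per level.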
Scott R. Cannon, PhD scott@cannon.cs.usu.edu
Dept. of Computer Science (801) 750-2015
Utah State Univ. FAX (801) 750-3265
Logan, UT. 84322-4205
Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: paprzycki_m@gusher.pb.utexas.edu
Subject: Call for Papers
Dear Netters,
If you are interested in giving your company or your work more visibility
behind the former Iron Curtain, or if you wish to contribute to the growth
of our European colleagues' journal, please consider a submission to the
conference announced below.
The past decade has seen the emergence of two highly successful conference series on the subject of parallel processing: CONPAR and VAPP. The Vector and Parallel Processors in Computational Science meetings were held in Chester (VAPP I, 1981), Oxford (VAPP II, 1984), and Liverpool (VAPP III, 1987). The International Conferences on Parallel Processing took place in Erlangen (CONPAR 81), Aachen (CONPAR 86) and Manchester (CONPAR 88). In 1990 the two series joined together and the CONPAR 90 - VAPP IV conference was organized in Zurich. CONPAR 92 - VAPP V took place in Lyon, France.
The next event in the series, CONPAR 94 - VAPP VI, will be organized at the University of Linz (Austria) from September 6 to 8, 1994. The format of the joint meeting will follow the pattern set by its predecessors. It is intended to review hardware and architecture developments together with languages and software tools for supporting parallel processing, and to highlight advances in models, algorithms and applications software on vector and parallel architectures.
It is expected that the program will cover:
* languages / software tools * automatic parallelization and mapping
* hardware / architecture * performance analysis
* algorithms * applications
* models / semantics * paradigms for concurrency
* testing and debugging * portability
A special session will be organized on Parallel Symbolic Computation.
The proceedings of the CONPAR 94 - VAPP VI conference are intended to be published in the Lecture Notes in Computer Science series by Springer Verlag.
This conference is organized by GUP-Linz in cooperation with RISC-Linz, ACPC and IFSR. Support by GI-PARS, OCG, OGI, IFIP WG10.3, IEEE, ACM, AFCET, CNRS, C3, BCS-PPSG, SIG and other organizations is being negotiated.
Schedule:
Second Announcement and Final Call for Papers October 1993
Submission of complete papers and tutorials Feb 15 1994
Notification of acceptance May 1 1994
Final (camera-ready) version of accepted papers July 1 1994
Paper submission:
Contributors are invited to send five copies of a full paper not exceeding 15 double-spaced pages in English to the program committee chairman at:
CONPAR 94 - VAPP VI
c/o Prof. B. Buchberger
Research Institute for Symbolic Computation (RISC-Linz)
Johannes Kepler University, A-4040 Linz, Austria
Phone: +43 7236 3231 41, Fax: +43 7236 3231 30
Email: conpar94@risc.uni-linz.ac.at
The title page should contain a 100 word abstract and five specific keywords.
CONPAR/VAPP also accepts and explicitly encourages submission by electronic mail to conpar94@risc.uni-linz.ac.at. Submitted files must be either
* in uuencoded (preferably compressed) DVI format or
* in uuencoded (preferably compressed) Postscript format
Burkhart H. (CH), Cosnard M. (F), Delves L.M. (UK), Ffitch J. (UK), Haring G. (A), Hong H. (A), Jesshope Ch. (UK), Jordan H.F. (USA), Kaltofen E. (USA), Kleinert W. (A), Kuchlin W. (D), Parkinson D. (UK), Miola A. (I), Mirenkov N. (J), Muraoka Y. (J), Reinartz K.D. (D), Steinhauser O. (A), Wait R. (UK), Wang P. (USA), Zinterhof P. (A)
Reply Form:
We encourage you to reply via e-mail, giving us the information listed below. If you cannot use e-mail, please copy the form below and send it to the conference address.
In article <1993Nov16.152328.4885@hubcap.clemson.edu>, msodhi@agsm.ucla.edu (Mohan Sodhi) writes:
> >Jeff> PLEASE don't forget about portability. If the same code can be
> >Jeff> compiled onto multiple architectures, it will make the
> >Jeff> programmer's job MUCH MUCH easier. (Tune to architecture as
> >Jeff> needed, instead of rewrite from scratch.)
>
> >I agree 100%.
>
> The hopes of the authors above are a little too pie-in-the-sky. For one,
> even for serial computers, porting an application from one operating system
> to another can take _months_ and involve a lot of new code.
That is only the case if the original application was not designed with portability in mind. We had to port an application of 30,000 lines of FORTRAN from one UNIX system to another, and did it within a day. Then we translated the FORTRAN code to C with f2c, and compiled this again without problems on the second UNIX system. Finally we ported the C version to a single transputer. As soon as we obtained the transputer version of f2c (that is, as soon as we obtained a FORTRAN compiler for the transputer) this again was no problem.
Conclusion: porting is easy when the application does not
make use of OS-specific things.
> Second,
> developing architecture-free algorithms does not mean *no new code* -- it just
> means no new math to be worked out. I do not think it is possible to
> have a program compile under different architectures (even if the algorithm
> is unchanged) with just a few compiler directives; I am not even sure
> this is desirable.
I do think this will be possible in the future.
> One thing at a time: let us concentrate on architecture
> free algorithms for now (in my area, operations research, this itself is a
> very new concept!): this will take our minds off the tooth fairy who will
> write a program that will compile under every computer architecture and every
> operating system.
Not all algorithms are _architecture_free_; consider for instance the summation of N variables, or the computation of an inner product. This can be done very well on a hypercube, but not on a linear array or ring of processors. Thus an algorithm that relies heavily on inner products is not _architecture_free_.
If the fastest sequential algorithms are not _architecture_free_, then it is not possible to write efficient, portable code. Conclusion: a scientist must know for which architectures his or her algorithm is suitable, and must tune the code to the architecture, or the architecture to the application.
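To make the point concrete, here is a small C sketch (my own illustration,
not code from the thread) of summing one partial value per processor. The
pairwise-exchange pattern below maps naturally onto a hypercube and finishes
in log2(P) steps, whereas passing partial sums around a ring needs P-1
nearest-neighbour steps -- which is exactly the architecture dependence
being discussed:

/* Sequential simulation of a recursive-doubling (hypercube-style) sum.
 * Each of the P "processors" starts with one partial value; after log2(P)
 * pairwise-exchange steps the total has accumulated at processor 0. */
#include <stdio.h>

#define P 8                                /* number of simulated processors */

int main(void)
{
    double partial[P];
    for (int i = 0; i < P; i++)
        partial[i] = (double)(i + 1);      /* node i holds the value i + 1 */

    int steps = 0;
    for (int stride = 1; stride < P; stride *= 2, steps++)
        for (int i = 0; i < P; i++)        /* nodes with the stride bit clear */
            if ((i & stride) == 0 && i + stride < P)
                partial[i] += partial[i + stride];  /* receive from the partner */

    printf("sum = %g after %d hypercube steps (a ring would need %d)\n",
           partial[0], steps, P - 1);
    return 0;
}

On a real hypercube each += corresponds to one message over a direct link;
on a ring or linear array the same partners are far apart, so either the
number of steps or the distance per step grows with P.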
> Mohan Sodhi
> msodhi@agsm.ucla.edu
>
Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: taylors@westminster.ac.uk (Simon Taylor)
Subject: Conference: 2ND EUROMICRO WORKSHOP ON PARALLEL AND DISTRIBUTED
In article <1993Nov17.134224.24886@hubcap.clemson.edu>, dbader@eng.umd.edu (David Bader) writes:
> Murray, have you looked at Stone's book? A quick review follows.
> -- definition of granularity in terms of Comp - Comms ratio
Yes (great book!). My point is that communication (in whatever guise)
is such a fundamental part of parallel algorithm design that it seems
unreasonable for manufacturers to say "Here is our new parallel machine. It
requires large grain computations" and then blame designers of parallel
software/algorithms for a failure to "catch up" with them. The gap could be
closed from two directions.
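(For readers joining the thread: the "Comp - Comms ratio" mentioned above is
the standard textbook measure of grain size, written out here for reference
rather than quoted from Stone's book:

    g = \frac{t_{\mathrm{comp}}}{t_{\mathrm{comm}}}

For example, a task that computes for 10 ms between messages that cost 1 ms
to exchange has g = 10 and is fairly coarse-grained; g near or below 1 is
fine-grained, which is where the communication overheads of current machines
hurt most.)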
I appreciate that I may be overstating this a bit (!), but then the prior
debate seemed to have been heavily loaded in the other direction.
Murray.
Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.arch,comp.parallel
From: ucapcdt@ucl.ac.uk (Christopher David Tomlinson)
Subject: Information on the Connection m/c
Message-ID: <1993Nov17.150636.415092@ucl.ac.uk>
Organization: Bloomsbury Computing Consortium
I am looking for references/information on the Connection Machine (both CM-1 and CM-2), in particular anything regarding the design of the processing elements. I would be grateful if anybody could supply me with any leads.
Organization: State University of New York at Albany
In article <1993Nov24.133229.2463@hubcap.clemson.edu> davec@ecst.csuchico.edu (Dave Childs) writes:
>I am trying to track down a good concurrent language for teaching concurrent
>concepts. Something based on C, C++, or pascal would work nicely, but we
>are open to other possibilities.
You might like to try Distributed C, freely available from the Universitaet Muenchen. Another good system is PCN (it has more documentation and books),
freely available from Argonne National Laboratory and Caltech.
Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: mstalzer@pollux.usc.edu (Mark Stalzer)
Subject: Re: looking for a Travelling Salesman code for CM
Organization: University of Southern California, Los Angeles, CA
Organization: CONVEX Computer Corporation, Richardson, Tx., USA
Date: Fri, 3 Dec 1993 16:57:13 GMT
X-Disclaimer: This message was written by a user at CONVEX Computer
Corp. The opinions expressed are those of the user and
not necessarily those of CONVEX.
Apparently-To: hypercube@hubcap.clemson.edu
Can someone tell me what the latest version of Load Balancer is? My literature
is from 1992 and I fear it is out of date. I'm most interested in
understanding what new features might have been added in a later release.
Thanks,
Jean
suplick@convex.com
Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
Date: Fri, 3 Dec 93 13:39:59 -0700
From: mdb528@michigan.et.byu.edu (Matthew D. Bennett)
Organization: Brigham Young University, Provo UT USA
Subject: Distributed Computing Environments
Keywords: distributed computing
A fellow student and I are about to embark on a development effort to build a preprocessor to help develop distributed programs. If there is something already out there, I would very much appreciate some information. If there is nothing out there, but you are interested in our project, please send me some mail and I will forward information on the preprocessor and how to get a copy of it.
Distributed C Development Environment now available for LINUX and UNICOS
The Distributed C Development Environment is now available for
AT 386/486 systems running LINUX and for Cray supercomputers running UNICOS.
The environment is stored at ftp.informatik.tu-muenchen.de in the
directory /local/lehrstuhl/eickel/Distributed_C. You can get it by
anonymous ftp.
-----
The Distributed C Development Environment was developed at Technische
Universitaet Muenchen, Germany, at the chair of Prof. Dr. J. Eickel and is
a collection of tools for parallel and distributed programming on single-
processor, multiprocessor and distributed UNIX systems, especially on
heterogeneous networks of UNIX computers. The environment's main purpose
is to support and to simplify the development of distributed applications
on UNIX networks. It consists of a compiler for a distributed programming
language, called Distributed C, a runtime library and several useful tools.
The programming model is based on explicit concurrency specification in the
programming language DISTRIBUTED C, which is an extension of standard C.
The language constructs were mainly taken from the language CONCURRENT C
developed by N. Gehani and W. D. Roome and are based on the concepts for
parallel programming implemented in the language Ada. Distributed C combines
ordinary programming in C with user-friendly programming of process
management, i.e. the specification, creation, synchronization, communication
and termination of concurrently executed processes.
The Distributed C Development Environment supports and simplifies
distributed programming in several ways:
o Development time is reduced by checking Distributed C programs for
errors during compilation. Because of that, errors within communication
or synchronization actions can be detected and avoided more easily.
o Programming is simplified by allowing the use of simple pointer types
even on loosely-coupled systems. This is perhaps the most powerful
feature of Distributed C. In this way, dynamic structures like linked
lists or trees can be passed between processes elegantly and easily --
even in heterogeneous networks. Only the anchor of a dynamic structure
must be passed to another process; the runtime system automatically
allocates heap space and copies the complete structure (see the sketch
after this list).
o Development is made user-friendly by support for the generation and
installation of the executable files. A special concept was developed
for performing the generation and storage of binaries by local and
remote compilation in heterogeneous UNIX networks.
o Programming difficulty is reduced by software-aided allocation of
processes at runtime. Only the system administrator needs special
knowledge about the target system's hardware. The user can apply tools
to map the processes of a Distributed C program to the hosts of a
concrete target system.
o Execution time is reduced by allocating processes to nodes of a network
with a static load balancing strategy.
o Programming is simplified because single-processor, multiprocessor and
distributed UNIX systems, and especially homogeneous and heterogeneous
UNIX networks, can be programmed fully transparently in Distributed C.
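As an aside, the following is a rough C sketch of what "passing only the
anchor" of a linked list amounts to conceptually. It is NOT Distributed C
source code; the type and function names (node, deep_copy_list) are purely
illustrative. The Distributed C runtime performs an equivalent deep copy
transparently when such a pointer is handed to a process in another address
space:

/* Conceptual sketch (not Distributed C): deep-copying a linked list from
 * its anchor, which is roughly what the runtime system does behind the
 * scenes when a pointer to a dynamic structure is passed between
 * processes that do not share memory. */
#include <stdlib.h>

struct node {
    int          value;
    struct node *next;
};

/* Allocate fresh heap space and copy the whole chain reachable from the
 * anchor, so the receiving side ends up with its own private copy. */
struct node *deep_copy_list(const struct node *anchor)
{
    struct node  *head = NULL;
    struct node **tail = &head;

    for (const struct node *p = anchor; p != NULL; p = p->next) {
        struct node *copy = malloc(sizeof *copy);
        copy->value = p->value;
        copy->next  = NULL;
        *tail = copy;                  /* append to the new chain */
        tail  = &copy->next;
    }
    return head;
}

In a heterogeneous network the runtime presumably also has to convert data
representations while copying, which is one reason this is done by the
runtime system rather than by the user.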
The Distributed C Development Environment consists mainly of the tools:
o Distributed C compiler (dcc):
compiles Distributed C to standard C.
o Distributed C runtime library (dcc.a):
contains routines for process creation, synchronization, ...
o Distributed C administration process (dcadmin):
realizes special runtime features.
o Distributed C installer program (dcinstall):
performs the generation and storage of binaries by local
and remote compilation in heterogeneous UNIX-networks.
The environment runs on the following systems:
o Sun SPARCstations (SunOS),
o Hewlett Packard workstations (HP/UX),
o IBM workstations (AIX),
o IBM ATs (SCO XENIX, SCO UNIX, LINUX),
o Convex supercomputers (ConvexOS),
o Cray supercomputers (Unicos),
o homogeneous and heterogeneous networks of the systems as mentioned above.
Moreover, the implementation was designed for use on Intel iPSC/2 systems.
The Distributed C Development Environment source code is provided "as is"
as free software and distributed in the hope that it will be useful,
but without warranty of any kind.
--
Christoph Pleier
pleierc@informatik.tu-muenchen.de
Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: janr@fwi.uva.nl (Jan de Ronde)
Subject: Hockney's f(1/2)
Date: 21 Dec 1993 15:57:41 GMT
Organization: FWI, University of Amsterdam
Nntp-Posting-Host: wendy.fwi.uva.nl
Summary: How to derive r(inf) and n(1/2) when arithmetic and memory references do not overlap
Keywords: characterization of performance
Dear all,
I'm currently reading various articles by Roger Hockney, all
concerning performance characterization using n(1/2), etc., including the
book Parallel Computers 2.
I'm looking for the derivation of the expressions for r(inf) and n(1/2)
for an arithmetic pipeline in which the peak performance cannot be realized
because of the data transfer to and from memory.
The situation of a memory bottleneck can be approximately modelled by
considering a memory access pipeline, described by the parameters
(r(inf)m, n(1/2)m), feeding data to a local memory from which an arithmetic
pipeline, described by the parameters (r(inf)a, n(1/2)a), operates.
He says that when one is interested in the average performance (r(inf), n(1/2))
of the combined memory and arithmetic pipeline, and it is not possible to
overlap memory transfers with arithmetic, a little algebra shows that:
    r(inf)  = r(inf)a / (1 + f(1/2)/f)  =  r(peak) * pipe(f/f(1/2))
    n(1/2)  = (n(1/2)m + x * n(1/2)a) / (1 + x)
where
    r(peak) = r(inf)a
    f(1/2)  = r(inf)a / r(inf)m
    x       = f / f(1/2)
Is there anyone who has written this out entirely? Or knows how to?
I would be grateful for response on this subject.
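For reference, here is one attempt at the algebra. It is only a sketch, not
taken from Hockney's text; it assumes that the vector length n is measured in
elements, that each element costs one memory reference and f arithmetic
operations, that both half-performance lengths are expressed in the same
element units, and that the two pipes cannot overlap:

  t(n) = \frac{n + n_{1/2,m}}{r_{\infty,m}} + \frac{f\,(n + n_{1/2,a})}{r_{\infty,a}}
       = n\left(\frac{1}{r_{\infty,m}} + \frac{f}{r_{\infty,a}}\right)
         + \frac{n_{1/2,m}}{r_{\infty,m}} + \frac{f\,n_{1/2,a}}{r_{\infty,a}}

Requiring this to have the generic pipeline form t(n) = f\,(n + n_{1/2})/r_\infty,
so that the average arithmetic rate is r(n) = f n / t(n) = r_\infty/(1 + n_{1/2}/n),
and matching first the terms proportional to n and then the constant terms gives

  r_\infty = \frac{f}{1/r_{\infty,m} + f/r_{\infty,a}}
           = \frac{r_{\infty,a}}{1 + f_{1/2}/f}
           = r_{\mathrm{peak}}\,\frac{x}{1+x}

  n_{1/2}  = \frac{n_{1/2,m}/r_{\infty,m} + f\,n_{1/2,a}/r_{\infty,a}}
                  {1/r_{\infty,m} + f/r_{\infty,a}}
           = \frac{n_{1/2,m} + x\,n_{1/2,a}}{1 + x}

with f_{1/2} = r_{\infty,a}/r_{\infty,m}, x = f/f_{1/2} and r_{\mathrm{peak}} = r_{\infty,a},
i.e. the expressions quoted above (note that x/(1+x) is Hockney's pipe function).
Corrections to these assumptions would be very welcome.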
Jan de Ronde,
University of Amsterdam.
Literature:
R.W. Hockney: Parameterization of Computer Performance. Parallel Computing 5, North-Holland, 1987.
R.W. Hockney, C.R. Jesshope: Parallel Computers 2 (book), 1988.
Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: dowd@acsu.buffalo.edu (Patrick Dowd)
Subject: Advance Program - MASCOTS94
Originator: dowd@mangrove.eng.buffalo.edu
Keywords: MASCOTS94
Sender: nntp@acsu.buffalo.edu
Nntp-Posting-Host: mangrove.eng.buffalo.edu
Reply-To: dowd@eng.buffalo.edu
Organization: State University of New York at Buffalo
Date: Tue, 21 Dec 1993 22:11:16 GMT
Apparently-To: comp-parallel@cis.ohio-state.edu
Attached below is the advance program of MASCOTS'94, which will be held
January 31 - February 2, 1994 in Durham, NC.
The conference will be held in the Washington Duke Inn and Golf Club.
Note that hotel reservations must be made prior to January 5 to
guarantee reservations at the low conference rate of $80/day. The
hotel telephone numbers are +1.919.490.0999 or +1.800.443.3853.
Conference registration information is included at the end of this
note, or contact Margrid Krueger at mak@ee.duke.edu for additional information.
Edwards, J. "A parallel implementation of the Painter's algorithm for transputer networks". Applications of Transputers 3. Proceedings of the Third International
Conference on Applications of Transputers, p.736-741, 1991.
O. Friedler, M.R. Stytz, "Dynamic detection of hidden-surfaces using a MIMD multiprocessor". Proceedings of Third Annual IEEE Symposium on Computer-Based Medical
Systems, p.44-51, 1990.
M. Blonk, W.F. Bronsvoort, F. Bruggeman, L. de Vos, "A parallel system for CSG hidden-surface elimination". Parallel Processing. Proceedings of the IFIP WG 10.3 Working Conference, p. 139-152, 1988.
If somebody has one of these or knows where I can get them, please contact me!
Thank you
Pascal Langlais
Graduate student
Department of Computer Science
Universite Laval
langlais@ift.ulaval.ca
Approved: parallel@hubcap.clemson.edu
Follow-up: comp.parallel
Path: bounce-back
Newsgroups: comp.parallel
From: beth@osc.edu (Beth Johnston)
Subject: OSC to buy CRAY T3D
Date: 22 Dec 1993 14:05:44 -0500
Organization: The Ohio Supercomputer Center
Summary: Ohio Supercomputer Center to buy MPP
FOR RELEASE: DECEMBER 20, 1993
CONTACTS:
OSC: Cheryl Johnson, 614-292-6067
Cray/Media: Steve Conway, 612-683-7133
Cray/Financial: Bill Gacki, 612-683-7372
OHIO SUPERCOMPUTER CENTER ORDERS CRAY T3D
MASSIVELY PARALLEL PROCESSING SYSTEM
COLUMBUS, Ohio, Dec. 20, 1993 -- The Ohio Supercomputer Center (OSC) and
Cray Research, Inc. (NYSE: CYR) today announced an agreement under which
OSC will acquire a 32-processor, "entry-level" version of the CRAY T3D
massively parallel processing (MPP) system. The new CRAY system will fit well
into OSC's existing Y-MP8/864 and Y-MP-EL/332 computing environment. The
agreement calls for OSC and Cray Research to use the new systems to collaborate
on advanced research projects including medical imaging. Financial terms were
not disclosed.
Under the agreement, a 32-processor, air-cooled CRAY T3D system is scheduled
to be installed at the OSC facility in Columbus in second-quarter 1994. The
system will be closely coupled with a CRAY Y-MP2E parallel vector
supercomputer system slated for installation at the same time, said OSC
director Dr. Charles F. Bender. "This agreement will create a heterogeneous
computing environment that combines the strengths of traditional parallel
vector supercomputing with the new capabilities of MPP."
According to Dr. Bender, the 32-processor system is the smallest version of the
CRAY T3D product line, which is available in sizes up to 2048 processors. "This
entry-level version will enable us to test the applicability of massively
parallel processing to the important research projects of Ohio industry and
higher education," he said.
As an example, the primary goal of the medical imaging research project is to
develop faster, more accurate methods for transferring and analyzing images
gained from MRI (magnetic resonance imaging) and other digital medical imaging
technologies. "We want to achieve real-time medical imaging, which could have
very significant impact on diagnosis, surgery planning, and medical education,"
said Dr. Bender.
The research collaboration calls for OSC to establish a multi-disciplinary team
consisting of existing staff with expertise in systems programming, training,
computational chemistry, computational fluid flow, and finite element analysis.
Cray Research would provide training as well as staff to collaborate on the
project, which has a three-year duration.
"Over the years, OSC and CRI have had many successful joint research projects
and we are pleased that OSC, which is already a Cray customer, has chosen to
continue its relationship with Cray Research and our heterogeneous CRAY T3D
system on this innovative research project," said Cray chairman and CEO John F.
Carlson. "We will fully support the goals of this collaboration."
OSC is a state-funded shared resource of high performance computing available
to scientists and engineers, both academic and commercial. Since 1987, OSC has
been committed to providing the latest computational tools and technologies to
industry and higher education.
Cray Research creates the most powerful, highest-quality computational tools
for solving the world's most challenging scientific and industrial problems.